Conversation

@Mercykid-bash Mercykid-bash commented Nov 26, 2025

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally by following Contributing and Testing.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a 'mix-placement' feature, which appears to allow shared experts to be treated as regular experts, affecting model configuration, expert map management, and MoE operations. A new patch for DeepSeek V2/V3 models is added to support this feature. The changes are extensive and touch several parts of the codebase.

My review has identified a few critical issues that need to be addressed. There's a potential for a runtime error in vllm_ascend/eplb/adaptor/vllm_adaptor.py due to an incorrect in-place tensor copy with mismatched shapes. Another critical bug was found in vllm_ascend/ops/fused_moe/moe_mlp.py where the logic for calculating group sizes is flawed and can lead to index errors or incorrect results. Additionally, a hardcoded 'magic number' in vllm_ascend/ops/fused_moe/experts_selector.py reduces code clarity and should be replaced with a named constant.

Other changes, such as refactoring and bug fixes in vllm_ascend/ops/fused_moe/fused_moe.py, seem correct and improve the code.

value=-1
)
self.expert_map_per_layer[layer_id].copy_(updated_expert_map_padded)
self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)

critical

The in-place copy self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map) will raise a RuntimeError if updated_expert_map has a different shape than self.expert_map_per_layer_cpu[layer_id]. The logic for padding updated_expert_map for the device tensor self.expert_map_per_layer suggests that shape mismatches are expected. The CPU-side map should be handled in a way that accommodates shape changes to avoid crashes. Reassigning the tensor, as was done previously, is a safer approach.

Suggested change
self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
self.expert_map_per_layer_cpu[layer_id] = updated_expert_map.clone()
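
For context, a minimal sketch of the failure mode; the tensor sizes below are made up for illustration and are not taken from the PR:

    import torch

    # Toy sizes only; the real maps are self.expert_map_per_layer_cpu[layer_id]
    # and the updated map produced during expert rebalancing.
    expert_map_cpu = torch.full((64,), -1, dtype=torch.int32)
    updated_expert_map = torch.full((60,), -1, dtype=torch.int32)

    try:
        # copy_ requires the source to broadcast to the target's shape, so a
        # 60-element map cannot be copied into a 64-element buffer.
        expert_map_cpu.copy_(updated_expert_map)
    except RuntimeError as exc:
        print(f"copy_ failed: {exc}")

    # Rebinding sidesteps the shape constraint, which is why the suggestion
    # above reassigns a clone instead of copying in place.
    expert_map_cpu = updated_expert_map.clone()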

Comment on lines +115 to +116
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)

critical

The calculation of new_group from group_list (which appears to be a cumulative sum of group sizes) is incorrect and can lead to runtime errors or incorrect behavior.

  1. If group_list contains only one element, torch.diff(group_list) will be empty, and group_diff[0] will raise an IndexError.
  2. If group_list is [g1, g1+g2, ...], group_diff will be [g2, g3, ...]. The current logic torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0) would produce [g2, g2, g3, ...], which is incorrect as the first group size should be g1.

The correct way to recover group sizes from a cumulative-sum tensor is to keep the first element of group_list and append torch.diff(group_list) after it.

Suggested change
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
new_group = torch.cat((group_list[0:1], torch.diff(group_list)))
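
To make both failure modes concrete, a small worked example with toy group sizes (not taken from the PR):

    import torch

    # Cumulative sums for groups of size 3, 5 and 2.
    group_list = torch.tensor([3, 8, 10])

    # Original logic: drops the first group size and duplicates the second.
    group_diff = torch.diff(group_list)                                     # tensor([5, 2])
    old_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)  # tensor([5, 5, 2]) -- wrong
    # With a single-element group_list, group_diff would be empty and
    # group_diff[0] would raise an IndexError.

    # Suggested fix: keep the first cumulative value and append the diffs.
    new_group = torch.cat((group_list[0:1], torch.diff(group_list)))        # tensor([3, 5, 2]) -- correct

    # Equivalent form using torch.diff's prepend argument.
    sizes = torch.diff(group_list, prepend=torch.zeros(1, dtype=group_list.dtype))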

Comment on lines +98 to +101
pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
0.4,
dtype=topk_weights.dtype,
device=topk_weights.device)

high

The value 0.4 is used as a hardcoded weight for the padded shared expert. This "magic number" makes the code harder to understand and maintain. It's unclear why this specific value is chosen and what its implications are, especially concerning weight normalization which typically expects weights to sum to 1. This value should be defined as a named constant with an explanatory comment, or passed as a parameter to the function to improve clarity and maintainability.
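
One possible shape for that change, sketched only; the constant name, the helper function, and sourcing the value from the model config are assumptions rather than part of the PR:

    import torch

    # Assumed name; ideally this comes from the MoE/model config, and its
    # interaction with top-k weight normalization should be documented here.
    SHARED_EXPERT_ROUTING_WEIGHT = 0.4

    def pad_shared_expert_weight(topk_weights: torch.Tensor) -> torch.Tensor:
        """Append one column carrying the shared-expert weight (hypothetical helper)."""
        pad = torch.full((topk_weights.shape[0], 1),
                         SHARED_EXPERT_ROUTING_WEIGHT,
                         dtype=topk_weights.dtype,
                         device=topk_weights.device)
        return torch.cat([topk_weights, pad], dim=-1)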

@github-actions

This pull request has conflicts; please resolve them before we can evaluate the pull request.

@Mercykid-bash Mercykid-bash changed the title from mix-placement to [Feat] Enable mix placement for DSR1 on Dec 1, 2025
Comment on lines +193 to +198
# self.global_redundant_expert_num = (
# self.expert_load_balancer.get_global_redundant_expert_num())

Unused code should be removed rather than commented out.

@SlightwindSec

Please sign your commits with -s (e.g., git commit -s -m "message") to pass the DCO check. You can amend your current commit using git commit --amend -s.

github-actions bot commented Dec 4, 2025

This pull request has conflicts; please resolve them before we can evaluate the pull request.

mercykid and others added 4 commits on December 4, 2025 at 17:09
@Mercykid-bash Mercykid-bash force-pushed the mix_placement branch 2 times, most recently from 7d6c8d9 to d6664ae on December 4, 2025 at 09:15
self.global_redundant_expert_num = (
self.expert_load_balancer.get_global_redundant_expert_num())
# self.global_redundant_expert_num = (
# self.expert_load_balancer.get_global_redundant_expert_num())

Commented-out code should be deleted rather than left in place.

shared_out = self._shared_experts(hidden_states)
if self._shared_experts is None:
shared_out = None
else:

Consider a conditional expression here to eliminate the redundant branch and keep the code concise, as in the sketch below.
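
For example, a conditional expression collapses the branch; this is a sketch using the names from the excerpt above:

    shared_out = (None if self._shared_experts is None
                  else self._shared_experts(hidden_states))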

# See DeepseekV2DecoderLayer for more details.
if hidden_states.dtype != torch.float16:
final_hidden_states *= self.routed_scaling_factor
elif self.shared_experts is not None:

These two branches have no direct relationship in either their conditions or their bodies; combining them in a single if/elif chain invites maintenance confusion. See the sketch below for one way to separate them.
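
One way to make the two conditions explicit while preserving the original if/elif semantics; the branch bodies are elided, so this is a sketch only:

    is_fp16 = hidden_states.dtype == torch.float16

    # Scale the routed output except in the FP16 overflow-workaround path.
    if not is_fp16:
        final_hidden_states *= self.routed_scaling_factor

    # Shared-experts handling from the original elif branch.
    if is_fp16 and self.shared_experts is not None:
        ...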

final_hidden_states, 0
)
final_hidden_states = final_hidden_states[:num_tokens]
elif self.tp_size > 1:

Same issue here: the two branches are unrelated in both their conditions and their bodies, which makes the if/elif chain confusing to maintain.

if name_mapped not in params_dict.keys():
continue
param = params_dict[name_mapped]
weight_loader = typing.cast(Callable[..., bool], param.weight_loader)

typing.cast performs no runtime check, so there is no guarantee that weight_loader is actually callable with this signature.
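
A defensive alternative, sketched as an illustration; the callable check and error message are assumptions, and the names come from the excerpt above:

    weight_loader = getattr(param, "weight_loader", default_weight_loader)
    if not callable(weight_loader):
        raise TypeError(
            f"weight_loader for {name_mapped!r} is not callable: {weight_loader!r}")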

default_weight_loader)
weight_loader(param, loaded_weight)
if not is_fuse_shared_experts_layer:
loaded_params.add(name)

name is added without any validation, so a None or otherwise invalid name could contaminate loaded_params.
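
A minimal guard, sketched as one possible shape; the exact validity check is an assumption, and the names come from the excerpt above:

    if not is_fuse_shared_experts_layer:
        if not isinstance(name, str) or not name:
            raise ValueError(f"invalid parameter name: {name!r}")
        loaded_params.add(name)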

github-actions bot commented Dec 4, 2025

This pull request has conflicts; please resolve them before we can evaluate the pull request.
